Dynamic programming principle and associated Hamilton-Jacobi-Bellman equation for stochastic recursive control problem with non-Lipschitz aggregator

Authors

Abstract


Similar Articles

Dynamic Programming Principle for One Kind of Stochastic Recursive Optimal Control Problem and Hamilton–Jacobi–Bellman Equation

Abstract. In this paper, we study a kind of stochastic recursive optimal control problem with obstacle constraints on the cost functional, where the cost functional is described by the solution of a reflected backward stochastic differential equation. We give the dynamic programming principle for this kind of optimal control problem and show that the value function is the unique viscosity solution ...
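
As a rough illustration of this setup (the notation below is chosen here for illustration and is not taken from the paper): the recursive cost is generated by a reflected BSDE whose first component is kept above an obstacle $h$ by an increasing process $K$, and the value function is built from its initial value,
\[
\begin{aligned}
&Y_s = \Phi(X_T) + \int_s^T f(r, X_r, Y_r, Z_r, u_r)\,dr - \int_s^T Z_r\,dB_r + K_T - K_s,\\
&Y_s \ge h(s, X_s), \qquad \int_t^T \bigl(Y_r - h(r, X_r)\bigr)\,dK_r = 0, \qquad V(t,x) = \sup_{u} Y_t^{t,x;u}.
\end{aligned}
\]
The dynamic programming principle then identifies $V$ as the unique viscosity solution of an obstacle problem for the associated Hamilton–Jacobi–Bellman equation.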


A Transformation Method for Solving the Hamilton–Jacobi–Bellman Equation for a Constrained Dynamic Stochastic Optimal Allocation Problem

We propose and analyse a method based on the Riccati transformation for solving the evolutionary Hamilton–Jacobi–Bellman equation arising from the dynamic stochastic optimal allocation problem. We show how the fully nonlinear Hamilton–Jacobi–Bellman equation can be transformed into a quasilinear parabolic equation whose diffusion function is obtained as the value function of a certain parametric ...
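
A minimal sketch of a Riccati-type transformation of this kind (the paper's exact formulation may differ; the symbols are ours): for a value function $V(t,x)$ solving the fully nonlinear HJB equation, introduce
\[
\varphi(\tau, x) = -\frac{\partial_{xx} V(t,x)}{\partial_x V(t,x)}, \qquad \tau = T - t,
\]
i.e. the absolute risk-aversion coefficient of $V$. Differentiating the HJB equation and rewriting it in terms of $\varphi$ yields, schematically, a quasilinear parabolic equation $\partial_\tau \varphi = \partial_{xx}\,\alpha(\varphi) + \text{lower-order terms}$, where the diffusion function $\alpha$ is itself the value of a parametric convex optimization problem over the admissible allocations.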


Local Solutions to the Hamilton–Jacobi–Bellman Equation in Stochastic Problems of Optimal Control

We suggest a method for solving control problems for linear stochastic systems with functionals quadratic in the phase variable and constraints on the absolute values of the control actions. Many problems of optimal control of mechanical systems under random perturbations can be written in the form (1). Here, the w = w_i(s) are independent Wiener processes and the ν = ν_i are independent Poisson ...
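
To fix ideas, a generic member of this problem class might look as follows (our notation; the paper's equation (1) is not reproduced here): a controlled linear system driven by Wiener and Poisson noise with hard bounds on the controls,
\[
dx(s) = \bigl(A(s)x(s) + B(s)u(s)\bigr)\,ds + \sum_i \sigma_i(s)\,dw_i(s) + \sum_i g_i(s)\,d\nu_i(s), \qquad |u_j(s)| \le \bar u_j,
\]
together with a functional quadratic in the phase variable, e.g.
\[
J(u) = \mathbb{E}\Bigl[\, x(T)^{\top} Q\, x(T) + \int_0^T x(s)^{\top} R(s)\, x(s)\,ds \Bigr] \to \min,
\]
where the $w_i$ are independent Wiener processes and the $\nu_i$ independent Poisson processes.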


Hamilton-Jacobi-Bellman Equations

This work treats Hamilton-Jacobi-Bellman equations. Their relation to several problems in mathematics is presented and an introduction to viscosity solutions is given. The work of several research articles is reviewed, including the Barles-Souganidis convergence argument and the inaugural papers on mean-field games. Original research on numerical methods for Hamilton-Jacobi-Bellman equations is...
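
For orientation, the HJB equations treated in such a survey typically take the form (one common convention; notation ours):
\[
-\partial_t u(t,x) + \sup_{a \in A}\Bigl\{ -b(x,a)\cdot\nabla u(t,x) - \tfrac12 \operatorname{tr}\bigl(\sigma\sigma^{\top}(x,a)\, D^2 u(t,x)\bigr) - \ell(x,a) \Bigr\} = 0, \qquad u(T,x) = g(x).
\]
The equation is fully nonlinear and in general admits no classical solution, which is why viscosity solutions are used; the Barles–Souganidis argument mentioned above shows that numerical schemes which are monotone, stable and consistent converge to the viscosity solution whenever a comparison principle holds.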


Stochastic Perron's method for Hamilton-Jacobi-Bellman equations

We show that the value function of a stochastic control problem is the unique solution of the associated Hamilton-Jacobi-Bellman (HJB) equation, completely avoiding the proof of the so-called dynamic programming principle (DPP). Using Stochastic Perron's method, we construct a super-solution lying below the value function and a sub-solution dominating it. A comparison argument easily closes the proof.
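
Schematically, the sandwich argument described above runs as follows (our summary of the method, not a quotation):
\[
v^- := \sup\{\text{stochastic sub-solutions}\} \;\le\; V \;\le\; \inf\{\text{stochastic super-solutions}\} =: v^+,
\]
where $v^-$ turns out to be a viscosity super-solution of the HJB equation and $v^+$ a viscosity sub-solution. A comparison principle then gives $v^+ \le v^-$, so $v^- = V = v^+$ and the value function is the unique viscosity solution, with no direct proof of the DPP required.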



Journal

Journal title: ESAIM: Control, Optimisation and Calculus of Variations

Year: 2018

ISSN: 1292-8119, 1262-3377

DOI: 10.1051/cocv/2017016